ASP-based Multi-shot Reasoning via DLV2 with Incremental Grounding

Calimeri, Francesco, Ianni, Giovambattista, Pacenza, Francesco, Perri, Simona, Zangari, Jessica

arXiv.org Artificial Intelligence

DLV2 is an AI tool for Knowledge Representation and Reasoning which supports Answer Set Programming (ASP) - a logic-based declarative formalism, successfully used in both academic and industrial applications. Given a logic program modelling a computational problem, an execution of DLV2 produces the so-called answer sets that correspond one-to-one to the solutions to the problem at hand. The computational process of DLV2 relies on the typical Ground & Solve approach where the grounding step transforms the input program into a new, equivalent ground program, and the subsequent solving step applies propositional algorithms to search for the answer sets. Recently, emerging applications in contexts such as stream reasoning and event processing have created a demand for multi-shot reasoning: here, the system is expected to be reactive while repeatedly executed over rapidly changing data. In this work, we present a new incremental reasoner obtained from the evolution of DLV2 towards iterated reasoning. Rather than restarting the computation from scratch, the system remains alive across repeated shots, and it incrementally handles the internal grounding process. At each shot, the system reuses previous computations for building and maintaining a large, more general ground program, from which a smaller yet equivalent portion is determined and used for computing answer sets. Notably, the incremental process is performed in a completely transparent fashion for the user. We describe the system, its usage, its applicability and performance in some practically relevant domains. Under consideration in Theory and Practice of Logic Programming (TPLP).
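The Ground & Solve pipeline and the multi-shot reuse described in the abstract can be illustrated with a toy sketch. This is plain Python with a made-up reachability program, not DLV2's actual implementation or API: a positive rule is grounded by substituting constants drawn from the facts, the unique answer set of the positive program is obtained as its least model, and on a new shot only the instances involving the fresh fact are grounded while the previously built ground program is kept alive and reused.

```python
# Toy program:
#   reach(X, Y) :- edge(X, Y).
#   reach(X, Z) :- reach(X, Y), edge(Y, Z).

def ground_edges(edges):
    # Grounding step: each edge fact yields one ground instance
    # of the rule reach(X, Y) :- edge(X, Y).
    return {("reach", x, y) for (x, y) in edges}

def solve(ground_atoms, edges):
    # Solving step: least-model fixpoint, i.e. the unique answer set
    # of this positive program.
    model = set(ground_atoms)
    changed = True
    while changed:
        changed = False
        for (_, x, y) in list(model):
            for (a, b) in edges:
                if y == a and ("reach", x, b) not in model:
                    model.add(("reach", x, b))
                    changed = True
    return model

# Shot 1: initial facts are grounded and solved.
edges = {("a", "b"), ("b", "c")}
grounded = ground_edges(edges)
answer_set = solve(grounded, edges)

# Shot 2: a new fact arrives; only the new instance is grounded
# (incremental grounding), and the existing ground program is
# extended rather than rebuilt from scratch.
edges.add(("c", "d"))
grounded |= ground_edges({("c", "d")})
answer_set = solve(grounded, edges)
```

The point of the sketch is the shot boundary: between shots the grounded set survives, so the cost of shot 2 is proportional to the new fact, mirroring the incremental behaviour the system provides transparently.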


Estimating Accuracy from Unlabeled Data: A Probabilistic Logic Approach

Emmanouil Platanios, Hoifung Poon, Tom M. Mitchell, Eric J. Horvitz

Neural Information Processing Systems

We propose an efficient method to estimate the accuracy of classifiers using only unlabeled data. We consider a setting with multiple classification problems where the target classes may be tied together through logical constraints. For example, a set of classes may be mutually exclusive, meaning that a data instance can belong to at most one of them. The proposed method is based on the intuition that: (i) when classifiers agree, they are more likely to be correct, and (ii) when the classifiers make a prediction that violates the constraints, at least one classifier must be making an error. Experiments on four real-world data sets produce accuracy estimates within a few percent of the true accuracy, using solely unlabeled data. Our models also outperform existing state-of-the-art solutions at both estimating accuracies and combining multiple classifier outputs. The results emphasize the utility of logical constraints in estimating accuracy, thus validating our intuition.
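The two intuitions can be made concrete with a small numerical sketch. This is a hand-rolled toy in Python over fabricated predictions, not the authors' probabilistic-logic model: pairwise agreement between classifiers serves as an unlabeled proxy for correctness, and any instance that a single classifier assigns to two mutually exclusive classes is a certified error, yielding a lower bound on that classifier's error rate without any labels.

```python
# Three classifiers each label five instances with a subset of the
# mutually exclusive classes {"cat", "dog"}; by the constraint, an
# instance belongs to at most one of them.
predictions = [
    # each row is (clf1, clf2, clf3) label sets for one instance
    ({"cat"}, {"cat"}, {"cat"}),
    ({"dog"}, {"dog"}, {"cat"}),
    ({"cat", "dog"}, {"cat"}, {"cat"}),   # clf1 violates mutual exclusion
    (set(), set(), set()),
    ({"dog"}, {"dog"}, {"dog"}),
]

def pairwise_agreement(preds, i, j):
    # Intuition (i): classifiers that agree often are more likely
    # to be correct, so agreement rates carry accuracy information.
    hits = sum(1 for row in preds if row[i] == row[j])
    return hits / len(preds)

def violation_rate(preds, i):
    # Intuition (ii): predicting two mutually exclusive classes for
    # the same instance guarantees at least one error, so this rate
    # lower-bounds the classifier's error rate using no labels.
    bad = sum(1 for row in preds if len(row[i]) > 1)
    return bad / len(preds)

agree_12 = pairwise_agreement(predictions, 0, 1)  # 4 of 5 instances agree
viol_1 = violation_rate(predictions, 0)           # 1 of 5 instances violated
```

The paper's contribution is to turn these raw signals into calibrated accuracy estimates via probabilistic logic; the sketch only shows why agreement and constraint violations are informative in the first place.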


Neural Probabilistic Logic Learning for Knowledge Graph Reasoning

Sun, Fengsong, Wang, Jinyu, Wei, Zhiqing, Zhang, Xianchao

arXiv.org Artificial Intelligence

Knowledge graph (KG) reasoning is a task that aims to predict unknown facts based on known factual samples. Reasoning methods can be divided into two categories: rule-based methods and KG-embedding based methods. The former possesses precise reasoning capabilities but finds it challenging to reason efficiently over large-scale knowledge graphs. The latter can reason over large-scale knowledge graphs but sacrifices reasoning accuracy. This paper aims to design a reasoning framework called Neural Probabilistic Logic Learning (NPLL) that achieves accurate reasoning on knowledge graphs. Our approach introduces a scoring module that effectively enhances the expressive power of embedding networks, striking a balance between model simplicity and reasoning capabilities. We improve the interpretability of the model by incorporating a Markov Logic Network based on variational inference. We empirically evaluate our approach on several benchmark datasets, and the experimental results validate that our method substantially enhances the accuracy and quality of the reasoning results.


Ocassionally Secure: A Comparative Analysis of Code Generation Assistants

Elgedawy, Ran, Sadik, John, Dutta, Senjuti, Gautam, Anuj, Georgiou, Konstantinos, Gholamrezae, Farzin, Ji, Fujiao, Lim, Kyungchan, Liu, Qian, Ruoti, Scott

arXiv.org Artificial Intelligence

Large Language Models (LLMs) are increasingly utilized in various applications, with code generation being a notable example. While previous research has shown that LLMs have the capability to generate both secure and insecure code, the literature does not take into account what factors help generate secure and effective code. Therefore, in this paper we focus on identifying and understanding the conditions and contexts in which LLMs can be effectively and safely deployed in real-world scenarios to generate quality code. We conducted a comparative analysis of four advanced LLMs--GPT-3.5 and GPT-4 (via ChatGPT) and Bard and Gemini (from Google)--using 9 separate tasks to assess each model's code generation capabilities. We contextualized our study to represent the typical use cases of a real-life developer employing LLMs for everyday tasks at work. Additionally, we place an emphasis on security awareness, which is represented through the use of two distinct versions of our developer persona. In total, we collected 61 code outputs and analyzed them across several aspects: functionality, security, performance, complexity, and reliability. These insights are crucial for understanding the models' capabilities and limitations, guiding future development and practical applications in the field of automated code generation.


Google and the European Commission will collaborate on AI ground rules

Engadget

The world's governments have taken note of generative AI's potential for massive disruption and are acting accordingly. European Commission (EC) industry chief Thierry Breton said Wednesday that the Commission would work with Alphabet on a voluntary pact to establish artificial intelligence ground rules, according to Reuters. Breton met with Google CEO Sundar Pichai in Brussels to discuss the arrangement, which will include input from companies based in Europe and other regions. The EU has a history of enacting strict technology rules, and the alliance gives Google a chance to provide input while steering clear of trouble down the road. The compact aims to set up guidelines ahead of official legislation like the EU's proposed AI Act, which will take much longer to develop and enact.


Is it too late to regulate AI to keep it from outsmarting the human race?

FOX News

Remember the good ol' days when our biggest worry was accidentally pocket-dialing someone? Well, times have changed, and so has technology. We now have these nifty AI systems that can do everything from making restaurant reservations to driving our cars.


Tools such as ChatGPT threaten transparent science; here are our ground rules for their use

#artificialintelligence

It has been clear for several years that artificial intelligence (AI) is gaining the ability to generate fluent language, churning out sentences that are increasingly hard to distinguish from text written by people. Last year, Nature reported that some scientists were already using chatbots as research assistants -- to help organize their thinking, generate feedback on their work, assist with writing code and summarize research literature (Nature 611, 192–193; 2022). But the release of the AI chatbot ChatGPT in November has brought the capabilities of such tools, known as large language models (LLMs), to a mass audience. Its developers, OpenAI in San Francisco, California, have made the chatbot free to use and easily accessible for people who don't have technical expertise. Millions are using it, and the result has been an explosion of fun and sometimes frightening writing experiments that have turbocharged the growing excitement and consternation about these tools.


Alexa, What Does It Take to Be Human?

#artificialintelligence

Mattel pulled a much-anticipated and hotly-debated toy recently. Aristotle, a device geared for children anywhere from infancy to adolescence, was set up to be the kid's version of Alexa. It boasted features such as the ability to soothe a crying baby, teach ABCs, reinforce good manners, play interactive games, and help kids with homework. Marketed as an "all-in-one nursery necessity" on Mattel's website, it also offered e-commerce functionality that would enable Aristotle to automatically reorder baby products based on user feedback. This little gadget would be the next big thing, engineered to "comfort, entertain, teach, and assist during each development stage – evolving with a child as their needs change."


NIST sets AI ground rules for agencies without 'stifling innovation' (Federal News Network)

#artificialintelligence

As agencies continue to experiment with artificial intelligence as a tool to transform the way they do business, the National Institute of Standards and Technology has set a roadmap for the government's role in developing future AI breakthroughs. After months of feedback from industry and elsewhere in government, as well as an in-person workshop in May, NIST has laid down some ground rules of what agencies should and shouldn't do with AI tools going forward. NIST's plan marks the federal government's first major effort to provide clarity and guidance to agencies looking to adopt a technology that, while buzzworthy now, actually dates back to the 1960s, yet still remains in its infancy.